Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference

Authors

Abstract

We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision. The model samples and rewards specific reasoning paths through policy gradient, in which the introspective revision algorithm modifies intermediate symbolic reasoning steps to discover reward-earning operations and leverages external knowledge to alleviate spurious reasoning and training inefficiency. The framework is supported by properly designed local relation models that avoid input entangling, which helps ensure the interpretability of the proof paths. The proposed model has built-in interpretability and shows superior capability in monotonicity inference, systematic generalization, and interpretability compared with previous models on existing datasets.
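The abstract describes sampling and rewarding symbolic reasoning paths with policy gradient. Below is a minimal REINFORCE-style sketch of that idea in PyTorch, not the authors' implementation; the policy network interface, per-step features, and reward definition are assumptions made purely for illustration.

import torch

# The seven natural logic relations used in MacCartney-style systems.
NATLOG_RELATIONS = ["equivalence", "forward_entailment", "reverse_entailment",
                    "negation", "alternation", "cover", "independence"]

def sample_proof_path(policy_net, step_features):
    """Sample one symbolic relation per edit step; keep log-probs for REINFORCE.

    policy_net(features) is a hypothetical local relation model returning
    unnormalized scores over the seven relations for a single step.
    """
    log_probs, path = [], []
    for features in step_features:
        logits = policy_net(features)
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()                       # one symbolic operation
        log_probs.append(dist.log_prob(action))
        path.append(NATLOG_RELATIONS[action.item()])
    return path, torch.stack(log_probs)

def reinforce_loss(log_probs, reward):
    """Policy-gradient objective: scale the sampled path's log-likelihood by its reward."""
    return -(reward * log_probs.sum())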


Related Articles

Natural logic and natural language inference

We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits lin...
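As a rough illustration of composing relations across a sequence of atomic edits, the sketch below joins per-edit relations left to right. The join table here is a small, simplified subset of the full seven-relation table, and unlisted pairs conservatively fall back to independence; it is not the paper's implementation.

# Simplified, partial join of natural logic relations across atomic edits.
EQUIV, FORWARD, REVERSE, INDEP = "≡", "⊑", "⊒", "#"

JOIN = {
    (EQUIV, EQUIV): EQUIV,
    (EQUIV, FORWARD): FORWARD,
    (FORWARD, EQUIV): FORWARD,
    (FORWARD, FORWARD): FORWARD,
    (EQUIV, REVERSE): REVERSE,
    (REVERSE, EQUIV): REVERSE,
    (REVERSE, REVERSE): REVERSE,
}

def join_path(edit_relations):
    """Compose per-edit relations; unknown combinations default to independence."""
    result = EQUIV
    for rel in edit_relations:
        result = JOIN.get((result, rel), INDEP)
    return result

# Deleting a restrictive modifier, then substituting a hypernym: both edits
# yield forward entailment, so the premise entails the hypothesis.
print(join_path([FORWARD, FORWARD]))  # ⊑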


Natural Logic for Natural Language

For a cognitive account of reasoning it is useful to factor out the syntactic aspect — the aspect that has to do with pattern matching and simple substitution — from the rest. The calculus of monotonicity, alias the calculus of natural logic, does precisely this, for it is a calculus of appropriate substitutions at marked positions in syntactic structures. We first introduce the semantic and th...
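A toy illustration of the substitution idea, under assumed inputs for polarity and lexical relations: in an upward-monotone position a broader (hypernym) substitution preserves entailment, while a downward-monotone position licenses a narrower (hyponym) substitution.

def substitution_entails(polarity, lexical_relation):
    """polarity: 'upward', 'downward', or 'none' for the marked position.
    lexical_relation: 'hypernym' (broader term), 'hyponym' (narrower), or 'other'."""
    if polarity == "upward":
        return lexical_relation == "hypernym"   # e.g. "some dogs bark" -> "some animals bark"
    if polarity == "downward":
        return lexical_relation == "hyponym"    # e.g. "no animals bark" -> "no dogs bark"
    return False                                # non-monotone positions block substitution

print(substitution_entails("upward", "hypernym"))    # True
print(substitution_entails("downward", "hyponym"))   # True
print(substitution_entails("downward", "hypernym"))  # False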


Natural Logic for Textual Inference

This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the othe...


Efficient Markov Logic Inference for Natural Language Semantics

Using Markov logic to integrate logical and distributional information in natural-language semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather tha...


Symbolic Representation and Natural Language

The notion of symbolizability is taken as the second requisite of computation (the first being 'algorithmizability'), and it is shown that symbols, qua symbols, are not symbolizable. This has far-reaching consequences for the computational study of language and for AI research in language understanding. The representation hypothesis is formulated, and its various assumptions and goals are examin...



Journal

Journal: Transactions of the Association for Computational Linguistics

Year: 2022

ISSN: 2307-387X

DOI: https://doi.org/10.1162/tacl_a_00458